5 research outputs found
Embedding Non-Ground Logic Programs into Autoepistemic Logic for Knowledge Base Combination
In the context of the Semantic Web, several approaches to the combination of
ontologies, given in terms of theories of classical first-order logic and rule
bases, have been proposed. They either cast rules into classical logic or limit
the interaction between rules and ontologies. Autoepistemic logic (AEL) is an
attractive formalism that makes it possible to overcome these limitations by
serving as a uniform host language into which both ontologies and nonmonotonic
logic programs can be embedded. For the latter, so far only the propositional
setting has been
considered. In this paper, we present three embeddings of normal and three
embeddings of disjunctive non-ground logic programs under the stable model
semantics into first-order AEL. While the embeddings all correspond with
respect to objective ground atoms, differences arise when considering
non-atomic formulas and combinations with first-order theories. We compare the
embeddings with respect to stable expansions and autoepistemic consequences,
considering the embeddings by themselves, as well as combinations with
classical theories. Our results reveal differences and correspondences of the
embeddings and provide useful guidance in the choice of a particular embedding
for knowledge combination. (Comment: 52 pages, submitted.)
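To make the notion of embedding concrete, here is one classical propositional embedding, due to Gelfond, on which such translations are typically based; it is given only as an orientation, not as one of the paper's three non-ground embeddings, which differ precisely in how the modal operator L is applied to heads and positive body atoms:

```latex
% Gelfond's classical embedding of a propositional normal rule into AEL:
% default negation "not r_j" becomes "\neg L r_j" ("r_j is not believed").
\[
  p \leftarrow q_1, \dots, q_m, \mathit{not}\, r_1, \dots, \mathit{not}\, r_n
  \quad\longmapsto\quad
  q_1 \wedge \dots \wedge q_m \wedge \neg L\, r_1 \wedge \dots \wedge \neg L\, r_n \supset p
\]
```

Under such a translation, the stable models of a program correspond to the objective parts of the stable expansions of its embedding; the paper lifts translations of this kind to non-ground rules in first-order AEL, where the differences described above emerge.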
Incidence and Combinatorial Properties of Linear Complexes
Dedicated to Helmut Karzel on the occasion of his 80th birthday. In this paper a generalisation of the notion of polarity is exhibited which makes it possible to describe completely, in an incidence-geometric way, the linear complexes of h-subspaces. A generalised polarity is defined to be a partial map which sends (h−1)-subspaces to hyperplanes and satisfies suitable linearity and reciprocity properties. Generalised polarities with the null property give rise to linear complexes, and vice versa. Provided that for h > 1 there exists a linear complex of h-subspaces which contains no star (this seems to be an open problem over an arbitrary ground field), one obtains the combinatorial structure of a partition of the line set of the projective space into non-geometric spreads of its hyperplanes. This line partition has an additional linearity property which turns out to be characteristic.
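As a point of orientation (a classical fact, not a result of the paper): for h = 1 in PG(3, K), a general linear complex of lines is exactly the set of absolute lines of a null polarity, i.e. the polarity induced by a nondegenerate alternating bilinear form.

```latex
% Classical case h = 1 in PG(3,K): the general linear complex of lines.
% A line spanned by points x, y belongs to the complex iff a fixed
% nondegenerate alternating form \omega vanishes on the spanning pair:
\[
  \mathcal{C} \;=\; \{\, \langle x, y \rangle \;:\; \omega(x, y) = 0 \,\},
  \qquad \omega(x, y) = -\omega(y, x),\quad \omega \text{ nondegenerate}.
\]
% Equivalently, in Pluecker coordinates a linear complex consists of the
% lines whose coordinates satisfy one fixed linear equation.
```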
kernlab - An S4 package for kernel methods in R
kernlab is an extensible package for kernel-based machine learning methods in R. It takes advantage of R's new S4 object model and provides a framework for creating and using kernel-based algorithms. The package contains dot product primitives (kernels), implementations of support vector machines and the relevance vector machine, Gaussian processes, a ranking algorithm, kernel PCA, kernel CCA, and a spectral clustering algorithm. Moreover, it provides a general-purpose quadratic programming solver and an incomplete Cholesky decomposition method. (Author's abstract.) Series: Research Report Series / Department of Statistics and Mathematics
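A minimal usage sketch (not from the abstract, but based on kernlab's documented interface) showing two of the listed components, the support vector machine ksvm() and kernel PCA kpca():

```r
# Minimal kernlab sketch: SVM classification and kernel PCA on iris.
library(kernlab)

data(iris)

## Support vector machine with a Gaussian (RBF) kernel.
svm_fit <- ksvm(Species ~ ., data = iris,
                kernel = "rbfdot", kpar = list(sigma = 0.1), C = 1)
table(predict(svm_fit, iris), iris$Species)  # resubstitution confusion matrix

## Kernel PCA: project the four numeric features onto two kernel components.
kpc <- kpca(~ ., data = iris[, -5],
            kernel = "rbfdot", kpar = list(sigma = 0.2), features = 2)
head(rotated(kpc))  # the projected data points
```

The quadratic programming solver and the incomplete Cholesky decomposition mentioned in the abstract are exposed as ipop() and inchol(), respectively.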
Model Checking a Fault-Tolerant Startup Algorithm: From Design Exploration to Exhaustive Fault Simulation
The increasing performance of modern model-checking tools offers high potential for the computer-aided design of fault-tolerant algorithms. Instead of relying on human imagination to generate taxing failure scenarios to probe a fault-tolerant algorithm during development, we define the fault behavior of a faulty process at its interfaces to the remaining system and use model checking to automatically examine all possible failure scenarios. We call this approach "exhaustive fault simulation". In this paper, we illustrate exhaustive fault simulation using a new startup algorithm for the Time-Triggered Architecture (TTA) and show that this approach is fast enough to be deployed in the design loop. We use the SAL toolset from SRI for our experiments and describe an approach to modeling and analyzing fault-tolerant algorithms that exploits the capabilities of tools such as this one.
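The core idea, leaving the faulty process's behavior at its interfaces unconstrained and then exploring every resulting behavior, can be illustrated on a toy system. The following is a hypothetical sketch of that idea, not the paper's SAL model of the TTA startup algorithm:

```r
# Toy "exhaustive fault simulation": two correct nodes each hold a bit and,
# every round, update it to the majority of (own bit, peer's bit, message
# from the faulty node). The faulty node is unconstrained at its interface:
# it may send any pair of bits, possibly different ones to each node. We
# enumerate every fault scenario over a bounded horizon and check that
# agreement between the correct nodes is never destroyed.
majority <- function(a, b, c) as.integer(a + b + c >= 2)

check_agreement <- function(rounds = 4) {
  # Start from every agreeing initial state (both nodes hold the same bit).
  frontier <- list(c(0, 0), c(1, 1))
  for (r in seq_len(rounds)) {
    nxt <- list()
    for (s in frontier) {
      # The faulty node may send any bit fa to node A and any bit fb to B.
      for (fa in 0:1) for (fb in 0:1) {
        s2 <- c(majority(s[1], s[2], fa), majority(s[2], s[1], fb))
        if (s2[1] != s2[2]) return(FALSE)  # counterexample: agreement broken
        nxt[[length(nxt) + 1]] <- s2
      }
    }
    frontier <- unique(nxt)
  }
  TRUE  # no fault scenario within the horizon violates agreement
}

check_agreement()  # TRUE: a two-faced faulty node cannot break agreement here
```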
The design and analysis of benchmark experiments
The assessment of the performance of learners by means of benchmark experiments is an established exercise. In practice, benchmark studies are a tool to compare the performance of several competing algorithms for a certain learning problem. Cross-validation or resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify algorithms with good properties. For several benchmarking problems, test procedures taking the variability of those point estimates into account have been suggested. Most of the recently proposed inference procedures are based on special variance estimators for the cross-validated performance. We introduce a theoretical framework for inference problems in benchmark experiments and show that standard statistical test procedures can be used to test for differences in the performances. The theory is based on well-defined distributions of performance measures which can be compared with established tests. To demonstrate its usefulness in practice, the theoretical results are applied to regression and classification benchmark studies based on artificial and real-world data.
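As an illustration of the kind of procedure the paper formalises (a hypothetical toy setup, not an experiment from the paper): draw B random training/test splits, record each learner's test performance on every split, and apply a standard paired test to the performance differences.

```r
# Toy benchmark experiment: compare two regression learners on one data set
# via B random training/test splits and a paired t-test on the test errors.
set.seed(1)
data(mtcars)

B <- 100
err <- matrix(NA_real_, nrow = B, ncol = 2,
              dimnames = list(NULL, c("small", "large")))

for (b in seq_len(B)) {
  train <- sample(nrow(mtcars), size = 22)            # roughly 2/3 training
  test  <- setdiff(seq_len(nrow(mtcars)), train)
  f_small <- lm(mpg ~ wt,             data = mtcars[train, ])
  f_large <- lm(mpg ~ wt + hp + qsec, data = mtcars[train, ])
  err[b, "small"] <- mean((mtcars$mpg[test] - predict(f_small, mtcars[test, ]))^2)
  err[b, "large"] <- mean((mtcars$mpg[test] - predict(f_large, mtcars[test, ]))^2)
}

colMeans(err)                       # point estimates of the two performances
t.test(err[, "small"], err[, "large"], paired = TRUE)  # test for a difference
```

The paper's contribution lies in giving such comparisons a well-defined inferential basis, so that standard tests of this kind can legitimately be applied to resampled performance measures.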